
    Domain Control for Neural Machine Translation

    Machine translation systems are highly sensitive to the domains they were trained on, and several domain adaptation techniques have been studied in depth. We propose a new technique for neural machine translation (NMT), called domain control, which is applied at runtime using a single neural network covering multiple domains. The presented approach shows quality improvements over dedicated single-domain models when translating on any of the covered domains, and even on out-of-domain data. In addition, model parameters do not need to be re-estimated for each domain, which makes the approach practical for real use cases. Evaluation is carried out on English-to-French translation for two testing scenarios. We first consider the case where an end-user performs translations on a known domain. Second, we consider the scenario where the domain is not known and is predicted at the sentence level before translating. Results show consistent accuracy improvements in both conditions. Comment: Published in RANLP 201
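
    A minimal sketch of the runtime idea, assuming a domain-token variant in which each source sentence is tagged with its (known or predicted) domain so that a single network serves every domain. The domain names and the keyword-voting predictor are illustrative assumptions, not the paper's actual mechanism.

```python
# Hedged sketch: source-side domain tokens for runtime domain control.
# DOMAINS, tag_source() and predict_domain() are illustrative assumptions.

DOMAINS = ["medical", "legal", "it", "news"]  # hypothetical covered domains

def tag_source(sentence: str, domain: str) -> str:
    """Prepend a pseudo-token so one NMT model can condition on the domain."""
    assert domain in DOMAINS
    return f"<{domain}> {sentence}"

def predict_domain(sentence: str) -> str:
    """Toy sentence-level domain predictor (keyword voting stands in for a real classifier)."""
    keywords = {
        "medical": {"patient", "dose", "clinical"},
        "legal": {"contract", "court", "pursuant"},
        "it": {"server", "login", "software"},
    }
    tokens = set(sentence.lower().split())
    best = max(keywords, key=lambda d: len(tokens & keywords[d]))
    return best if tokens & keywords[best] else "news"  # fall back to a generic domain

# Known-domain scenario: the end-user supplies the domain.
print(tag_source("The patient received a 5 mg dose.", "medical"))
# Unknown-domain scenario: predict the domain per sentence before translating.
src = "Restart the server after the software update."
print(tag_source(src, predict_domain(src)))
```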

    Exciton photon strong-coupling regime for a single quantum dot in a microcavity

    We report on the observation of the strong-coupling regime between a single GaAs quantum dot and a microdisk optical mode. Photoluminescence is performed at various temperatures to tune the quantum dot exciton with respect to the optical mode. At resonance, we observe an anticrossing, the signature of the strong-coupling regime, with a well-resolved doublet. The vacuum Rabi splitting amounts to 400 μeV and is twice as large as the individual linewidths. Comment: submitted on November 7th 200
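
    For context, the reported figures can be read against the textbook coupled-oscillator description of an exciton-photon system. The relations below are the standard form, not equations reproduced from the paper, and g, γ_X, γ_C, Δ are generic symbols rather than the authors' notation.

```latex
% Standard coupled-oscillator eigen-energies for an exciton (X) / cavity mode (C)
% with coupling g, linewidths \gamma_X, \gamma_C and detuning \Delta = E_X - E_C:
\[
  E_{\pm} = \frac{E_X + E_C}{2} - \frac{i(\gamma_X + \gamma_C)}{4}
            \pm \sqrt{g^{2} + \left(\frac{\Delta}{2} - \frac{i(\gamma_X - \gamma_C)}{4}\right)^{2}}
\]
% At resonance (\Delta = 0) the doublet separation (vacuum Rabi splitting) is
\[
  \hbar\Omega_R = 2\sqrt{g^{2} - \frac{(\gamma_X - \gamma_C)^{2}}{16}},
\]
% and the doublet is resolved only if this splitting exceeds the linewidths; the
% reported 400 \(\mu\)eV, twice the individual linewidths, meets that criterion.
```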

    Generic and Specialized Word Embeddings for Multi-Domain Machine Translation

    Supervised machine translation works well when the training and test data are sampled from the same distribution. When this is not the case, adaptation techniques help ensure that the knowledge learned from out-of-domain texts generalises to in-domain sentences. We study here a related setting, multi-domain adaptation, where the number of domains is potentially large and adapting separately to each domain would waste training resources. Our proposal transposes to neural machine translation the feature expansion technique of (Daumé III, 2007): it isolates domain-agnostic from domain-specific lexical representations, while sharing most of the network across domains. Our experiments use two architectures and two language pairs: they show that our approach, while simple and computationally inexpensive, outperforms several strong baselines and delivers a multi-domain system that successfully translates texts from diverse sources.
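
    A minimal sketch of the feature-expansion idea transposed to embeddings, assuming a plain concatenation of a shared table with a per-domain table; the dimensions and variable names are illustrative and not the paper's architecture.

```python
# Hedged sketch: generic + domain-specific word embeddings in the spirit of
# Daumé III (2007) feature augmentation; table sizes and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, n_domains = 1000, 64, 4

E_generic = rng.normal(scale=0.1, size=(vocab_size, dim))             # shared across domains
E_domain = rng.normal(scale=0.1, size=(n_domains, vocab_size, dim))   # one table per domain

def embed(word_ids, domain_id):
    """Concatenate the shared representation with the example's domain-specific
    one; everything downstream of the embedding layer stays shared."""
    generic = E_generic[word_ids]               # (seq_len, dim)
    specific = E_domain[domain_id][word_ids]    # (seq_len, dim)
    return np.concatenate([generic, specific], axis=-1)  # (seq_len, 2 * dim)

tokens = np.array([3, 17, 254])          # toy sentence as word ids
print(embed(tokens, domain_id=2).shape)  # (3, 128)
```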

    Qualitative analysis of post-editing for high quality machine translation

    In the context of the massive adoption of Machine Translation (MT) by human localization services in Post-Editing (PE) workflows, we analyze the activity of post-editing high-quality translations through a novel PE analysis methodology. We define and introduce a new unit for evaluating post-editing effort, the Post-Editing Action (PEA), for which we provide human evaluation guidelines and propose a process to evaluate these PEAs automatically. We applied this methodology to data sets from two technologically different MT systems. In that context, we show that more than 35% of the remaining effort can be saved by introducing global PEAs and edit propagation.
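
    A rough sketch of how edit actions could be counted automatically and how the benefit of propagating repeated edits might be quantified; the word-level opcodes below merely stand in for the paper's PEA definition, which is not reproduced here.

```python
# Hedged sketch: approximate post-editing actions with word-level edit opcodes
# and count how many are identical repeats that edit propagation would avoid.
from collections import Counter
from difflib import SequenceMatcher

def edit_actions(mt: str, post_edit: str):
    """Word-level replace/insert/delete operations between an MT output and its post-edit."""
    a, b = mt.split(), post_edit.split()
    ops = SequenceMatcher(a=a, b=b).get_opcodes()
    return [(tag, tuple(a[i1:i2]), tuple(b[j1:j2]))
            for tag, i1, i2, j1, j2 in ops if tag != "equal"]

corpus = [  # (MT output, post-edited reference) pairs
    ("the contract are signed", "the contract is signed"),
    ("this contract are valid", "this contract is valid"),
]
actions = [act for mt, pe in corpus for act in edit_actions(mt, pe)]
repeated = sum(n - 1 for n in Counter(actions).values() if n > 1)
print(f"{len(actions)} actions, {repeated} avoidable by propagating identical edits")
```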

    Scalable machine learning-assisted clear-box characterization for optimally controlled photonic circuits

    Photonic integrated circuits offer a compact and stable platform for generating, manipulating, and detecting light, and they are instrumental for both classical and quantum applications. Imperfections stemming from fabrication constraints, tolerances, and the operation wavelength limit the accuracy, and thus the utility, of current photonic integrated devices. Mitigating these imperfections typically requires a model of the underlying physical structure and the estimation of parameters that are challenging to access; direct solutions are currently lacking for mesh configurations beyond trivial cases. We introduce a scalable method to characterize photonic chips through an iterative machine learning-assisted procedure. Our method is based on a clear-box approach that harnesses a fully modeled virtual replica of the photonic chip to be characterized. The process is sample-efficient and can be carried out with a continuous-wave laser and powermeters. The model estimates individual passive phases, crosstalk, beamsplitter reflectivity values, and relative input/output losses. Building on the accurate characterization results, we mitigate imperfections to enable enhanced control over the device. We validate our characterization and imperfection-mitigation methods on a 12-mode Clements interferometer equipped with 126 phase shifters, achieving beyond state-of-the-art chip control with an average 99.77% amplitude fidelity over 100 implemented Haar-random unitary matrices.
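
    A minimal sketch of the clear-box idea on a single Mach-Zehnder interferometer: a parameterized virtual replica is fitted to measured output powers. The two-parameter model (one passive phase offset, one beamsplitter reflectivity) and the least-squares fit are illustrative assumptions; the paper's iterative procedure over a full 12-mode mesh with crosstalk and losses is not reproduced.

```python
# Hedged sketch: fit a modeled "virtual replica" of one MZI to measured powers.
import numpy as np
from scipy.optimize import minimize

def mzi_output_powers(phi, r):
    """Output powers of an MZI with internal phase phi and two identical
    beamsplitters of reflectivity r, for light injected in the top port."""
    bs = np.array([[np.sqrt(r), 1j * np.sqrt(1 - r)],
                   [1j * np.sqrt(1 - r), np.sqrt(r)]])
    phase = np.diag([np.exp(1j * phi), 1.0])
    out = bs @ phase @ bs @ np.array([1.0, 0.0])
    return np.abs(out) ** 2

# Stand-in "measurements": an imperfect chip (unknown passive offset, non-ideal
# reflectivity) probed with a CW laser at several phase-shifter settings.
settings = np.linspace(0.0, 2.0 * np.pi, 20)
measured = np.array([mzi_output_powers(p + 0.3, 0.45) for p in settings])

def loss(theta):
    offset, r = theta
    model = np.array([mzi_output_powers(p + offset, r) for p in settings])
    return np.sum((model - measured) ** 2)

fit = minimize(loss, x0=[0.1, 0.4], bounds=[(-np.pi, np.pi), (0.01, 0.99)])
print("estimated passive phase offset and reflectivity:", fit.x)
# Note: r and 1 - r give identical powers here, so the fit recovers r up to that symmetry.
```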

    Présentation

    The theme addressed in this issue of Mots is to be linked to a reflection on present-day events that we began a few weeks after the suicide attacks against the Twin Towers and the Pentagon and the start of military operations against Al-Qaida and the Taliban regime. These first reflections were developed within the framework of a CNRS thematic school, organized at the ENS-LSH in November 2001, one of whose axes was the elucidation of the role of the v..

    SYSTRAN Purely Neural MT Engines for WMT2017

    This paper describes SYSTRAN's systems submitted to the WMT 2017 shared news translation task for English-German, in both translation directions. Our systems are built using OpenNMT, an open-source neural machine translation system implementing sequence-to-sequence models with LSTM encoders/decoders and attention. We also experimented with automatically back-translated monolingual data. Our resulting models are further hyper-specialised with an adaptation technique that finely tunes models according to the evaluation test sentences. Comment: Published in WMT 2017
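
    A rough sketch of the hyper-specialisation step, assuming the adaptation data is chosen by lexical similarity to the evaluation test sentences before a short fine-tuning pass; the TF-IDF criterion and the threshold are assumptions, not SYSTRAN's actual selection method.

```python
# Hedged sketch: select fine-tuning sentences close to the test set by TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_sentences = ["The chancellor met the press in Berlin.", "Stock markets fell sharply."]
candidate_pool = [
    "The minister met journalists in Paris.",
    "I really enjoyed the concert last night.",
    "European markets dropped after the announcement.",
]

vec = TfidfVectorizer().fit(test_sentences + candidate_pool)
sims = cosine_similarity(vec.transform(candidate_pool), vec.transform(test_sentences)).max(axis=1)
selected = [s for s, sim in zip(candidate_pool, sims) if sim > 0.2]  # arbitrary threshold
print(selected)  # sentences kept for a short "hyper-specialisation" fine-tuning pass
```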